Results 1 - 6 of 6
1.
IEEE Internet of Things Journal ; : 1-1, 2023.
Article in English | Scopus | ID: covidwho-2293083

ABSTRACT

Coronavirus disease 2019 (COVID-19) has posed renewed challenges, particularly with the emergence of new variants. The number of patients seeking treatment has increased significantly, putting tremendous pressure on hospitals and healthcare systems. Given the potential of artificial intelligence (AI) to help clinicians improve personalized medicine for COVID-19, we propose a deep learning model based on 1D and 3D convolutional neural networks (CNNs) to predict the survival outcome of COVID-19 patients. Our model consists of two CNN channels that operate on CT scans and the corresponding clinical variables. Specifically, each patient data set consists of CT images and 44 corresponding clinical variables, used as the 3D CNN and 1D CNN inputs, respectively. The model combines imaging and clinical features to distinguish short-term from long-term survival. Our models demonstrate higher performance than state-of-the-art models, with an AUC-ROC of 91.44–91.60% versus 84.36–88.10% and an accuracy of 83.39–84.47% versus 79.06–81.94% in predicting the survival groups of patients with COVID-19. Based on these findings, the combined clinical and imaging features in the deep CNN model can be used as a prognostic tool and help distinguish censored from uncensored cases of COVID-19. IEEE
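
As a rough illustration of the two-channel design described above, the sketch below assumes PyTorch; the layer widths and the fusion head are illustrative and not taken from the paper. Only the 3D/1D split and the 44-variable clinical input follow the abstract. It shows how a 3D CNN branch for CT volumes and a 1D CNN branch for clinical variables can be concatenated before a survival classifier.

# Minimal sketch of a two-branch CNN fusing CT volumes with clinical variables.
# Layer widths and the fusion head are illustrative, not the authors' architecture.
import torch
import torch.nn as nn

class DualBranchSurvivalNet(nn.Module):
    def __init__(self, n_clinical=44, n_classes=2):
        super().__init__()
        # 3D branch for CT volumes: (batch, 1, depth, height, width)
        self.ct_branch = nn.Sequential(
            nn.Conv3d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool3d(2),
            nn.Conv3d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool3d(2),
            nn.AdaptiveAvgPool3d(1), nn.Flatten(),            # -> (batch, 32)
        )
        # 1D branch for the clinical variables: (batch, 1, n_clinical)
        self.clinical_branch = nn.Sequential(
            nn.Conv1d(1, 8, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1), nn.Flatten(),            # -> (batch, 8)
        )
        # Fusion head: concatenate both feature vectors and classify
        self.head = nn.Sequential(nn.Linear(32 + 8, 64), nn.ReLU(), nn.Linear(64, n_classes))

    def forward(self, ct_volume, clinical):
        feats = torch.cat([self.ct_branch(ct_volume), self.clinical_branch(clinical)], dim=1)
        return self.head(feats)

# Example forward pass with dummy data
model = DualBranchSurvivalNet()
logits = model(torch.randn(2, 1, 32, 64, 64), torch.randn(2, 1, 44))
print(logits.shape)  # torch.Size([2, 2]) -> short- vs long-term survival scores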

2.
IEEE Access ; 11:28856-28872, 2023.
Article in English | Scopus | ID: covidwho-2305971

ABSTRACT

Coronavirus disease 2019, commonly known as COVID-19, is an extremely contagious disease caused by severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2). Computerised Tomography (CT) scan-based diagnosis and progression analysis of COVID-19 have recently received academic interest. Most algorithms use a two-stage analysis in which slice-level analysis is followed by patient-level analysis. However, such an analysis requires labels for individual slices in the training data. In this paper, we propose a single-stage 3D approach that does not require slice-wise labels. Our proposed method comprises volumetric data pre-processing and 3D ResNet transfer learning. The pre-processing includes pulmonary segmentation to identify the regions of interest, volume resampling and a novel approach for extracting salient slices. This is followed by a region-of-interest-aware 3D ResNet for feature learning. The backbone networks utilised in this study are 3D ResNet-18, 3D ResNet-50 and 3D ResNet-101. Our proposed method employing 3D ResNet-101 outperforms existing methods, yielding an overall accuracy of 90%. The sensitivity for correctly predicting the COVID-19, Community Acquired Pneumonia (CAP) and Normal class labels in the dataset is 88.2%, 96.4% and 96.1%, respectively. © 2013 IEEE.
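
A minimal sketch of the 3D ResNet transfer-learning step is shown below, assuming PyTorch and torchvision's pretrained video 3D ResNet-18 (r3d_18); the paper's lung segmentation, salient-slice extraction and region-of-interest-aware modifications are not reproduced here.

# Sketch of 3D ResNet transfer learning only; pre-processing and the ROI-aware
# variant from the paper are omitted.
import torch
import torch.nn as nn
from torchvision.models.video import r3d_18  # 3D ResNet-18 pretrained on video clips

model = r3d_18(weights="DEFAULT")            # weights argument depends on torchvision version
model.fc = nn.Linear(model.fc.in_features, 3)  # COVID-19 / CAP / Normal

# A CT volume is single-channel, so repeat it to the 3 channels the backbone expects.
ct = torch.randn(1, 1, 64, 112, 112)         # (batch, channel, slices, H, W)
logits = model(ct.repeat(1, 3, 1, 1, 1))
print(logits.shape)                          # torch.Size([1, 3])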

3.
IEEE Transactions on Automation Science and Engineering ; 20(1):649-661, 2023.
Article in English | Scopus | ID: covidwho-2239779

ABSTRACT

The COVID-19 pandemic has shown a growing demand for robots to replace humans in multiple tasks, including logistics, patient care, and disinfection of contaminated areas. In this paper, a new autonomous disinfection robot based on the aerosolized hydrogen peroxide disinfection method is proposed. Its unique feature is that autonomous navigation is planned by developing an atomization disinfection model and a target detection algorithm, which enables cost-effective, point-of-care, full-coverage disinfection of the air and surfaces in indoor environments. A prototype robot has been fabricated for experimental study. The effectiveness of the proposed concept design for automated indoor environmental disinfection has been verified with air and surface quality monitoring provided by a qualified third-party testing agency. Note to Practitioners - Robots are desirable for reducing the risk of human infection by highly contagious viruses. For this purpose, a novel autonomous disinfection robot is designed herein for automated disinfection of air and surfaces in indoor environments. The robot consists of a mobile carrier platform and an atomizer disinfection module. The disinfection modeling is conducted using measurement data provided by a custom-built PM sensor array. To achieve cost-effective, qualified disinfection, a full-coverage path planning scheme is proposed based on the established disinfection model. Moreover, to specifically disinfect frequently contacted objects (e.g., tables and chairs in offices and hospitals), a target perception algorithm is proposed to mark the locations of these objects on the map, and the robot disinfects these marked areas more carefully. Experimental results indicate that the developed disinfection robot offers great effectiveness in fighting the COVID-19 pandemic. © 2004-2012 IEEE.
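
As a hedged illustration of full-coverage path planning, the Python sketch below performs a simple boustrophedon (lawnmower) sweep over an occupancy grid; the paper's planner additionally weights coverage using the atomization disinfection model and the target perception algorithm, neither of which is modelled here.

# Illustrative boustrophedon sweep over an occupancy grid (0 = free, 1 = obstacle).
def coverage_path(grid):
    """Return a cell-visiting order that sweeps every free cell row by row."""
    path = []
    for r, row in enumerate(grid):
        # Alternate sweep direction on each row to avoid retracing
        cols = range(len(row)) if r % 2 == 0 else reversed(range(len(row)))
        path.extend((r, c) for c in cols if row[c] == 0)  # skip obstacle cells
    return path

grid = [
    [0, 0, 0, 0],
    [0, 1, 1, 0],
    [0, 0, 0, 0],
]
print(coverage_path(grid))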

4.
IEEE Access ; 2023.
Article in English | Scopus | ID: covidwho-2234580

ABSTRACT

COVID-19 has affected many people across the globe. Though vaccines are now available, early detection of the disease plays a vital role in the better management of COVID-19 patients. An Artificial Neural Network (ANN) powered Computer Aided Diagnosis (CAD) system can automate the detection pipeline, providing accurate diagnosis and overcoming the limitations of manual methods. This work proposes a CAD system for COVID-19 that detects and classifies abnormalities in lung CT images using an Artificial Bee Colony (ABC) optimised ANN (ABCNN). The proposed ABCNN approach works by segmenting suspicious regions from the CT images of non-COVID and COVID patients using an ABC-optimised region growing process and extracting texture and intensity features from those regions. Further, an optimised ANN model, whose input features, initial weights and hidden nodes are optimised using ABC optimisation, classifies those abnormal regions into COVID and non-COVID classes. The proposed ABCNN approach is evaluated using lung CT images collected from public datasets. In comparison to other available techniques, the proposed ABCNN approach achieved a high classification accuracy of 92.37% when evaluated on a set of 470 lung CT images. Author
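
The sketch below is a simplified Artificial Bee Colony loop in Python (employed-bee and scout phases only; onlooker selection is omitted) applied to a toy objective. In the paper, the objective would be the ANN's classification error over candidate feature subsets, initial weights and hidden-node counts, which is not reproduced here.

# Simplified ABC-style neighbourhood search for a generic continuous objective.
import random

def abc_minimise(fitness, dim, bounds, n_food=10, limit=20, iters=200):
    lo, hi = bounds
    foods = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(n_food)]
    trials = [0] * n_food
    best = min(foods, key=fitness)
    for _ in range(iters):
        # Employed-bee phase: perturb each food source towards a random partner
        for i in range(n_food):
            k, d = random.randrange(n_food), random.randrange(dim)
            cand = foods[i][:]
            cand[d] += random.uniform(-1, 1) * (foods[i][d] - foods[k][d])
            cand[d] = min(max(cand[d], lo), hi)
            if fitness(cand) < fitness(foods[i]):
                foods[i], trials[i] = cand, 0
            else:
                trials[i] += 1
        # Scout phase: abandon sources that stopped improving
        for i in range(n_food):
            if trials[i] > limit:
                foods[i], trials[i] = [random.uniform(lo, hi) for _ in range(dim)], 0
        best = min(foods + [best], key=fitness)
    return best

# Toy usage: minimise a sphere function in place of the ANN error
print(abc_minimise(lambda x: sum(v * v for v in x), dim=3, bounds=(-5, 5)))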

5.
Revista Iberoamericana de Tecnologias del Aprendizaje ; : 1-1, 2022.
Article in English | Scopus | ID: covidwho-1985492

ABSTRACT

The transformation of education through emerging technologies has become imperative due to the COVID-19 pandemic, which has forced higher education institutions to propose strategies that provide better experiences for their students. The objective of this research was to identify the challenges and opportunities of using WebVR tools for the development of academic activities, with the intention of interpreting how these technological tools combine with educational practices among teachers and students. The context was an activity in an online graduate-level course at a private higher education institution in Mexico. A qualitative study with a case study design was conducted to account for the participants' perceptions regarding the value of the user experience, the analysis of the integrated tools and the contribution of this WebVR tool to the development of competencies in educational practice. The results show that if the technical requirements and a basic level of appropriation of digital competencies are met, incorporating this type of emerging technology brings benefits to educational practice, such as the development of transversal and disciplinary competencies, improved interaction and socialization among participants, and increased motivation through the incorporation of playful elements. IEEE

6.
IEEE Access ; 2022.
Article in English | Scopus | ID: covidwho-1741135

ABSTRACT

Objective: The adoption of telehealth accelerated rapidly as the global COVID-19 pandemic disrupted communities and in-person healthcare practices. While telehealth initially improved accessibility for remote treatment, physical rehabilitation has been heavily limited by the loss of hands-on evaluation tools. This paper presents an immersive virtual reality (iVR) pipeline for replicating physical therapy success metrics through machine learning applied to patient observation. Methods: We demonstrate a method of training gradient-boosted decision trees for kinematic estimation to replicate mobility and strength metrics with an off-the-shelf iVR system. During a two-month study, training data were collected while a group of users completed physical rehabilitation exercises in an iVR game. Using these data, we trained on iVR-based motion capture data and OpenSim biomechanical simulations. Results: Our final model indicates that upper-extremity kinematics from OpenSim can be accurately predicted using the HTC Vive head-mounted display system, with a mean absolute error of less than 0.78 degrees for joint angles and less than 2.34 Nm for joint torques. Additionally, these predictions are viable for run-time estimation, with a prediction time of approximately 0.74 ms during exercise sessions. Conclusion: These findings suggest that iVR paired with machine learning can serve as an effective medium for collecting evidence-based patient success metrics in telehealth. Significance: Our approach can help increase the accessibility of physical rehabilitation with off-the-shelf iVR head-mounted display systems by providing therapists with the metrics needed for remote evaluation. Author
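
A minimal sketch of the regression step is shown below, assuming scikit-learn and synthetic pose features; the feature layout and data are illustrative, since the paper trains against OpenSim-derived kinematics and HTC Vive motion capture, which are not reproduced here.

# Gradient-boosted regression from headset/controller pose features to a joint angle.
# Features and target are synthetic stand-ins for the paper's OpenSim labels.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.metrics import mean_absolute_error
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 9))              # e.g. HMD + two controller positions (x, y, z)
y = 30 * np.tanh(X[:, 0] - X[:, 3]) + rng.normal(scale=1.0, size=2000)  # stand-in joint angle

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = GradientBoostingRegressor(n_estimators=200, max_depth=3, learning_rate=0.1)
model.fit(X_tr, y_tr)
print("MAE (deg):", mean_absolute_error(y_te, model.predict(X_te)))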
